

Inferring learning rules from animal decision-making

Neural Information Processing Systems

How do animals learn? This remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors. Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal's policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules. After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. We found that certain learning rules were far more capable of explaining trial-to-trial changes in an animal's policy.
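To make the kind of learning rule being inferred concrete, here is a minimal, illustrative sketch of a REINFORCE-style policy-gradient update for a Bernoulli logistic choice policy, with a separate learning rate per parameter (as in point (ii) above). The function name, variable names, and parameter values are hypothetical, not the paper's implementation:

```python
import numpy as np

def reinforce_update(w, x, choice, reward, lr):
    """One REINFORCE-style update of a logistic choice policy.

    w      : policy weights (stimulus weights plus a bias term)
    x      : trial regressors (last entry = 1 for the bias)
    choice : the animal's choice on this trial (0 or 1)
    reward : 1 if the choice was rewarded, else 0
    lr     : per-parameter learning rates (same shape as w)
    """
    p_right = 1.0 / (1.0 + np.exp(-w @ x))   # P(choice = 1 | x, w)
    # Gradient of log pi(choice | x, w) for a Bernoulli logistic policy
    grad_logp = (choice - p_right) * x
    # Policy-gradient step, scaled by the reward signal
    return w + lr * reward * grad_logp

# Example: a two-parameter policy (one stimulus weight, one bias weight)
w = np.array([0.5, 0.0])
x = np.array([1.2, 1.0])             # stimulus value and bias regressor
lr = np.array([0.1, 0.05])           # distinct learning rates per parameter
w_new = reinforce_update(w, x, choice=1, reward=1, lr=lr)
```

Under this particular rule the weights move only on rewarded trials; comparing such candidate rules against the trial-to-trial weight trajectories recovered from choice data is the kind of model comparison described in point (i).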


Review for NeurIPS paper: Inferring learning rules from animal decision-making

Neural Information Processing Systems

Weaknesses: As the authors point out (lines 148-151), the result relies strongly on the assumption that the learning model is REINFORCE, which I think is a very strong assumption. It would be better supported by literature showing that animals can and do perform similar learning. Also, as the authors note, their model is descriptive. Given the nature of a descriptive model, I feel I don't gain much insight from it about how animals learn. For example, the authors found a non-zero update to the bias weight on incorrect trials, which explains the "incorrect" behavior of repeatedly choosing the wrong option. This sounds like noise in the behavior to me, and the model does not explain it further beyond labeling it as noise.


Review for NeurIPS paper: Inferring learning rules from animal decision-making

Neural Information Processing Systems

I want to thank the authors for preparing the detailed rebuttal. This paper was discussed among all the reviewers during the post-rebuttal discussion phase. Overall, the reviewers are excited about the research topic of inferring the learning rules of animals, and there was a clear consensus that the paper should be accepted. The rebuttal helped clarify some of the reviewers' questions and steered their decisions towards acceptance.

